Tendon-driven robots, in which one or more tendons under tension bend and manipulate a flexible backbone, can improve minimally invasive surgeries involving difficult-to-reach regions in the human body. Planning motions safely within constrained anatomical environments requires accuracy and efficiency in shape estimation and collision checking. Tendon robots that employ arbitrarily-routed tendons can achieve complex and interesting shapes, enabling them to travel to difficult-to-reach anatomical regions; however, such robots have unintuitive nonlinear kinematics. We therefore envision clinicians leveraging an assistive interactive-rate motion planner to automatically generate collision-free trajectories to clinician-specified destinations during minimally-invasive surgical procedures. Standard motion-planning techniques cannot achieve interactive-rate planning with current, computationally expensive tendon-robot kinematic models. In this work, we present a 3-phase motion-planning system for arbitrarily-routed tendon-driven robots comprising a Precompute phase, a Load phase, and a Supervisory Control phase. Our system achieves interactive rates through a fast kinematic model (over 1,000 times faster than current models), a fast voxel-based collision method (27.6 times faster than standard methods), and a precomputed roadmap of the entire robot workspace with pre-voxelized vertices and edges. In simulated experiments, we show that our motion-planning method achieves high tip-position accuracy and generates plans at 14.8 Hz on average in a segmented collapsed-lung pleural-space anatomical environment. Our results show that our method is 17,700 times faster than popular off-the-shelf motion-planning algorithms paired with standard forward-kinematics (FK) and collision-detection approaches. Our open-source code is available online.
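As a rough illustration of the Load and Supervisory Control phases, the sketch below searches a precomputed roadmap while skipping any vertex or edge whose pre-voxelized volume overlaps the anatomy's occupied voxels. The `Roadmap` interface (`neighbors`, `voxels`) and the plain Dijkstra search are assumptions for illustration, not the paper's actual API.

```python
# Minimal sketch: query a precomputed roadmap with pre-voxelized
# vertices/edges against an obstacle voxel set (all names hypothetical).
import heapq

def collides(voxels, obstacle_voxels):
    """A configuration/edge is in collision if any of its precomputed
    voxels is also occupied by the anatomy."""
    return not voxels.isdisjoint(obstacle_voxels)

def plan_to_goal(roadmap, start, goal, obstacle_voxels):
    """Dijkstra over the roadmap, pruning vertices and edge swept
    volumes that intersect the obstacle voxels."""
    dist, prev = {start: 0.0}, {}
    frontier = [(0.0, start)]
    while frontier:
        d, v = heapq.heappop(frontier)
        if v == goal:
            path = [v]
            while v in prev:
                v = prev[v]
                path.append(v)
            return path[::-1]
        if d > dist.get(v, float('inf')):
            continue  # stale heap entry
        for u, edge_voxels, cost in roadmap.neighbors(v):
            if collides(roadmap.voxels(u), obstacle_voxels):
                continue  # vertex in collision
            if collides(edge_voxels, obstacle_voxels):
                continue  # edge swept volume in collision
            nd = d + cost
            if nd < dist.get(u, float('inf')):
                dist[u], prev[u] = nd, v
                heapq.heappush(frontier, (nd, u))
    return None  # goal unreachable in the current environment
```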
Steerable needles are capable of accurately targeting difficult-to-reach clinical sites in the body. By bending around sensitive anatomical structures, steerable needles have the potential to reduce the invasiveness of many medical procedures. However, inserting these needles with curved trajectories increases the risk of tissue damage due to perpendicular forces exerted on the surrounding tissue by the needle's shaft, potentially resulting in lateral shearing through tissue. Such forces can cause significant damage to surrounding tissue, negatively affecting patient outcomes. In this work, we derive a tissue and needle force model based on a Cosserat string formulation, which describes the normal forces and frictional forces along the shaft as a function of the planned needle path, friction model and parameters, and tip piercing force. We propose this new force model and associated cost function as a safer and more clinically relevant metric than those currently used in motion planning for steerable needles. We fit and validate our model through physical needle robot experiments in a gel phantom. We use this force model to define a bottleneck cost function for motion planning and evaluate it against the commonly used path-length cost function in hundreds of randomly generated 3-D environments. Plans generated with our force-based cost show a 62% reduction in the peak modeled tissue force with only a 0.07% increase in length on average compared to using the path-length cost in planning. Additionally, we demonstrate the ability to plan motions with our force-based cost function in a lung tumor biopsy scenario from a segmented computed tomography (CT) scan. By planning motions for the needle that aim to minimize the modeled needle-to-tissue force explicitly, our method plans needle paths that may reduce the risk of significant tissue damage while still reaching desired targets in the body.
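To make the force model concrete, here is a minimal sketch of a bottleneck force cost under a simplified Cosserat-string view, where the lateral (normal) force per unit length is axial tension times path curvature and Coulomb friction accumulates the tip piercing force toward the base. The function name, parameters, and first-order integration scheme are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: peak modeled tissue force along a planned needle path,
# assuming n(s) ~ T(s) * kappa(s) and Coulomb friction f(s) = mu * n(s).
def bottleneck_force_cost(kappa, ds, mu, f_tip):
    """kappa: curvature samples along the path, ordered tip -> base;
    ds: arc-length step between samples; mu: friction coefficient;
    f_tip: tip piercing force (boundary condition at the tip)."""
    tension = f_tip
    peak_normal = 0.0
    for k in kappa:
        normal = tension * k               # lateral force per unit length
        peak_normal = max(peak_normal, normal)
        tension += mu * normal * ds        # friction accumulates toward base
    return peak_normal                     # the bottleneck (peak) cost
```

A planner would then prefer, among paths reaching the target, the one minimizing this peak value rather than total path length.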
A computational graph in a deep neural network (DNN) denotes a specific data flow diagram (DFD) composed of many tensors and operators. Existing toolkits for visualizing computational graphs are not applicable when the structure is highly complicated and large-scale (e.g., BERT [1]). To address this problem, we propose leveraging a suite of visual simplification techniques, including a cycle-removing method, a module-based edge-pruning algorithm, and an isomorphic-subgraph stacking strategy. We design and implement an interactive visualization system suitable for computational graphs with up to 10 thousand elements. Experimental results and usage scenarios demonstrate that our tool reduces the number of elements by 60% on average and hence enhances performance in recognizing and diagnosing DNN models. Our contributions are integrated into an open-source DNN visualization toolkit, namely MindInsight [2].
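Of the three simplifications, cycle removal is the easiest to sketch: a depth-first search reverses back edges so the graph becomes a DAG amenable to hierarchical layout. The networkx-based snippet below is illustrative only and is not MindInsight's implementation.

```python
# Sketch of DFS-based cycle removal: reverse back edges so the
# computational graph becomes acyclic for hierarchical layout.
import networkx as nx

def remove_cycles(g: nx.DiGraph) -> nx.DiGraph:
    dag = nx.DiGraph()
    dag.add_nodes_from(g.nodes)
    state = {}  # node -> 'active' (on DFS stack) | 'done'

    def dfs(u):
        state[u] = 'active'
        for v in g.successors(u):
            if state.get(v) == 'active':
                dag.add_edge(v, u)   # back edge: reverse its direction
            else:
                dag.add_edge(u, v)   # tree/forward/cross edge: keep as-is
                if v not in state:
                    dfs(v)
        state[u] = 'done'

    for node in g.nodes:
        if node not in state:
            dfs(node)
    return dag  # every cycle contained a back edge, so this is a DAG
```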
Dialect differences caused by regional, social, and economic barriers cause performance discrepancies for many groups of users of language technology. Fair, inclusive, and equitable language technology must, critically, be dialect invariant, meaning that performance remains constant over dialectal shifts. Current English systems often fall significantly short of this ideal since they are designed and tested on a single dialect: Standard American English. We introduce Multi-VALUE -- a suite of resources for evaluating and achieving English dialect invariance. We build a controllable rule-based translation system spanning 50 English dialects and a total of 189 unique linguistic features. Our translation maps Standard American English text to a synthetic form of each dialect, using an upper bound on the natural density of features in that dialect. First, we use this system to build stress tests for question answering, machine translation, and semantic parsing tasks. These stress tests reveal significant performance disparities for leading models on non-standard dialects. Second, we use this system as a data augmentation technique to improve the dialect robustness of existing systems. Finally, we partner with native speakers of Chicano and Indian English to release new gold-standard variants of the popular CoQA task.
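As a toy illustration of such a feature rule (not Multi-VALUE's actual implementation, which operates over syntactic parses), the snippet below applies a negative-concord transformation, a well-attested feature of several English dialects, at a controllable density:

```python
# Hypothetical string-level sketch of one dialect feature rule applied
# with a density parameter that upper-bounds the feature's frequency.
import random
import re

def negative_concord(sentence: str, density: float = 1.0) -> str:
    """Toy rule: "doesn't ... any" -> "don't ... no" (negative concord),
    applied with probability `density`."""
    if random.random() > density:
        return sentence
    out = re.sub(r"\bdoesn't\b", "don't", sentence)
    return re.sub(r"\bany\b", "no", out)

print(negative_concord("She doesn't have any money."))
# -> "She don't have no money."
```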
In an era of countless content offerings, recommender systems alleviate information overload by providing users with personalized content suggestions. Due to the scarcity of explicit user feedback, modern recommender systems typically optimize for the same fixed combination of implicit feedback signals across all users. However, this approach disregards a growing body of work highlighting that (i) implicit signals can be used by users in diverse ways, signaling anything from satisfaction to active dislike, and (ii) different users communicate preferences in different ways. We propose applying the recent Interaction Grounded Learning (IGL) paradigm to address the challenge of learning representations of diverse user communication modalities. Rather than taking a fixed, human-designed reward function, IGL is able to learn personalized reward functions for different users and then optimize directly for the latent user satisfaction. We demonstrate the success of IGL in experiments using both simulations and real-world production traces.
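A minimal sketch of one IGL interaction step might look as follows, with the policy, the reward decoder `psi`, and the feedback source all assumed interfaces rather than any production API:

```python
# Hypothetical sketch of an Interaction Grounded Learning step: decode
# latent satisfaction from implicit feedback, then update the policy as
# if that decoded reward had been observed directly.
def igl_step(policy, psi, context, feedback_of):
    action, prob = policy.sample(context)        # recommend an item
    feedback = feedback_of(context, action)      # clicks, dwell time, ...
    reward = psi(context, action, feedback)      # decode latent satisfaction
    policy.update(context, action, reward / prob)  # IPS-weighted update
    psi.update(context, action, feedback)        # refine decoder jointly
```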
Despite the popularity of Vision Transformers (ViTs) and eXplainable AI (XAI), only a few explanation methods have been proposed for ViTs thus far. They use attention weights of the classification token on patch embeddings and often produce unsatisfactory saliency maps. In this paper, we propose a novel method for explaining ViTs called ViT-CX. It is based on patch embeddings, rather than attentions paid to them, and their causal impacts on the model output. ViT-CX can be used to explain different ViT models. Empirical results show that, in comparison with previous methods, ViT-CX produces more meaningful saliency maps and does a better job at revealing all the important evidence for prediction. It is also significantly more faithful to the model as measured by deletion AUC and insertion AUC.
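The core causal idea can be sketched as follows: ablate each patch embedding and score the drop in the target-class output. This is a simplification; ViT-CX itself builds more structured masks from the embeddings, and `vit` here is an assumed callable from patch embeddings to class scores.

```python
# Hedged sketch of embedding-level causal saliency for a ViT.
import torch

def embedding_saliency(vit, embeddings, target_class):
    """embeddings: (num_patches, dim) patch embeddings for one image."""
    base = vit(embeddings)[target_class]
    saliency = torch.zeros(embeddings.shape[0])
    for i in range(embeddings.shape[0]):
        masked = embeddings.clone()
        masked[i] = 0.0                                 # ablate one embedding
        saliency[i] = base - vit(masked)[target_class]  # causal impact
    return saliency.relu()  # keep positive evidence; reshape to a map
```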
Channel attention reigns supreme as an effective technique in the field of computer vision. However, the channel attention proposed by SENet suffers from information loss in feature learning caused by its use of Global Average Pooling (GAP) to represent each channel as a scalar. Thus, designing an effective channel attention mechanism requires a way to enhance feature preservation when modeling channel inter-dependencies. In this work, we utilize wavelet-transform compression as a solution to the channel representation problem. We first test the wavelet transform in an auto-encoder model equipped with a conventional channel attention module. Next, we test the wavelet transform as a standalone channel compression method. We prove that global average pooling is equivalent to the recursive approximate Haar wavelet transform. With this proof, we generalize channel attention using wavelet compression and name the result WaveNet. Our method can be embedded within existing channel attention methods with a couple of lines of code. We test the proposed method on the ImageNet dataset for the image classification task. Our method outperforms the baseline SENet and achieves state-of-the-art results. Our code implementation is publicly available at https://github.com/hady1011/WaveNet-C.
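The claimed equivalence is easy to verify numerically: recursively applying the approximation (low-pass) half of the Haar transform to a channel's values converges to the channel's global average, up to normalization. A small check, using the averaging form of the Haar low-pass filter:

```python
# Numerical check: recursive Haar approximation coefficients reduce to GAP.
import numpy as np

x = np.random.rand(8)              # one channel, flattened, length 2^k
h = x.copy()
while h.size > 1:
    h = (h[0::2] + h[1::2]) / 2.0  # normalized Haar low-pass (pair average)
print(np.isclose(h[0], x.mean()))  # True: recursive Haar approx == GAP
```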
Can we re-organize the non-linear activation functions in deep networks to create hardware-efficient models? To address this question, we propose a new paradigm called Restructurable Activation Networks (RANs) that manipulates the amount of non-linearity in models to improve their hardware-awareness and efficiency. First, we propose RAN-explicit (RAN-e) -- a new hardware-aware search space and a semi-automatic search algorithm -- to replace inefficient blocks with hardware-aware blocks. Next, we propose a training-free model scaling method called RAN-implicit (RAN-i), in which we theoretically prove the link between a network's topology and its expressivity in terms of the number of non-linear units. We demonstrate that our networks achieve state-of-the-art ImageNet results at different scales and for several types of hardware. For instance, compared to EfficientNet-Lite-B0, RAN-e achieves similar accuracy while improving frames-per-second (FPS) by 1.5x on Arm micro-NPUs. Meanwhile, RAN-i demonstrates up to a 2x reduction in #MACs with similar or better accuracy. We also show that RAN-i achieves 40% higher FPS than ConvNeXt on Arm-based datacenter CPUs. Finally, RAN-i-based object detection networks achieve similar or higher mAP and up to 33% higher FPS on datacenter CPUs compared to ConvNeXt-based models.
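As a rough illustration of the quantity RAN-i reasons about, one could count a model's non-linear units as below; this counting helper is hypothetical and far simpler than the paper's actual topology-expressivity criterion.

```python
# Hypothetical proxy: count activation modules as the "non-linear units"
# whose number RAN-i links to network expressivity.
import torch.nn as nn

def count_nonlinear_units(model: nn.Module) -> int:
    acts = (nn.ReLU, nn.GELU, nn.SiLU, nn.Hardswish)
    return sum(1 for m in model.modules() if isinstance(m, acts))

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                      nn.Linear(8, 8), nn.GELU())
print(count_nonlinear_units(model))  # 2
```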
As the practical utility of Artificial Intelligence (AI) and Machine Learning (ML) based techniques grows, so does the threat of adversarial attacks. There is a need to red-team this ecosystem to identify system vulnerabilities and potential threats, to characterize properties that will enhance system robustness, and to encourage the creation of effective defenses. A secondary need is to share this AI security threat intelligence among different stakeholders, such as model developers, users, and AI/ML security professionals. In this paper, we create and describe a prototype system, CTI4AI, to address the need to methodically identify and share AI/ML-specific vulnerabilities and threat intelligence.
The idea I put forward in this paper is a synthesis function based on directed and undirected rules extracted from the operations of artificial neural networks.